Moral Machine


The Problem with the Trolley Problem and the Need for Systems Thinking

Communications of the ACM

The Trolley Problem has inspired scores of psychology experiments, including MIT's Moral Machine,1 an online survey in which people decided what a self-driving car should do in the event of an unavoidable accident. Participants were shown a series of paired scenarios, presented as map-like diagrams, with varying numbers and types of pedestrians and passengers. For each pair, they had to choose between options such as driving ahead and killing pedestrians, or veering into an obstacle and killing passengers. Based on 40 million responses from more than 200 countries, the researchers identified general preferences, such as sparing humans over animals. They also found differences between cultures: people from countries with collectivistic cultures showed a stronger preference for sparing the lives of older people over those of younger people.
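
To make the aggregation concrete, here is a minimal sketch, run on invented data rather than the actual Moral Machine dataset, of how forced-choice responses to scenario pairs can be tallied into per-country preference estimates:

```python
# Hypothetical sketch: aggregate forced-choice scenario responses.
# Data and attribute names are invented for illustration.
from collections import defaultdict

# Each response: (country, attribute_tested, chosen_side).
# For "humans_vs_pets", side "A" spares the humans; for "young_vs_old",
# side "A" spares the younger characters.
responses = [
    ("US", "humans_vs_pets", "A"),
    ("US", "humans_vs_pets", "A"),
    ("JP", "young_vs_old", "B"),
    ("JP", "young_vs_old", "B"),
    ("US", "young_vs_old", "A"),
]

# Tally, per (country, attribute), how often side "A" was chosen.
tally = defaultdict(lambda: [0, 0])  # key -> [count_A, total]
for country, attribute, side in responses:
    key = (country, attribute)
    tally[key][0] += side == "A"
    tally[key][1] += 1

for (country, attribute), (count_a, total) in sorted(tally.items()):
    print(f"{country} {attribute}: P(choose A) = {count_a / total:.2f} (n={total})")
```

Comparing these per-country proportions is, in simplified form, how cultural differences such as the young-versus-old preference can surface from raw choices.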


The Tricky Business of Computing Ethical Values

Slate

An expert in computing responds to Tara Isabella Burton's "I Know Thy Works." In 2018, researchers from the Massachusetts Institute of Technology Media Lab, Harvard University, the University of British Columbia, and Université Toulouse Capitole shared the results of one of the largest moral experiments conducted to date. They recorded 40 million ethical decisions from millions of people across 233 countries. The experiment's "Moral Machine" posed variations of the classic trolley problem to users, recasting the trolley as a self-driving car: should the car swerve and collide with jaywalking pedestrians, or maintain its current trajectory, yielding inevitable doom for the passengers inside?


Machines Becoming Moral - Part 2 - Nigel Crook

#artificialintelligence

In my book 'Rise of the Moral Machine: Exploring Virtue Through a Robot's Eyes', I include a short fictional story about a couple (Mr and Mrs Morales) who are in the process of purchasing their first autonomous vehicle. Having chosen the model, the colour and the trim of the car, the last set of choices they are required to make concerns the vehicle's 'ethical alignment': that is, the alignment of the vehicle's autonomous driving decisions with the Morales' social and ethical preferences. Without giving too much of the story away, the Morales are presented with a series of situations, each of which requires the autonomous vehicle to make a moral decision. These decisions are presented as choices of who should be the casualties of an unavoidable collision, such as "should the vehicle run over the pensioner on the pedestrian crossing, or the child on the pavement?"
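
Purely to illustrate the story's premise, here is a hypothetical sketch of what such a purchase-time 'ethical alignment' profile might look like as a configuration object; every field name is invented, drawn neither from the book nor from any real vehicle API:

```python
# Hypothetical "ethical alignment" profile for an autonomous vehicle.
# All fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class EthicalAlignment:
    # Relative weight on passenger vs. pedestrian safety (0.0 to 1.0).
    passenger_priority: float = 0.5
    # Whether the planner may ever swerve onto a pavement.
    allow_swerve_onto_pavement: bool = False
    # Attributes the planner is forbidden to condition on at all.
    protected_attributes: tuple = ("age", "gender", "social_status")

# The Morales might, say, weight pedestrians slightly above themselves.
morales_car = EthicalAlignment(passenger_priority=0.4)
print(morales_car)
```

Even this toy version makes the dilemma tangible: whoever sets these defaults, the buyer or the manufacturer, is encoding a moral stance.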


The moral machine: Who lives, who dies, you decide!

#artificialintelligence

You might one day find yourself in a life-threatening situation where your car cannot stop in time to avoid a collision. It has a choice: either collide with one of the other vehicles, endangering another passenger's life, or put your own life in harm's way. What do you think it would do? If we were driving a car in manual mode, whichever way we chose would be considered a reaction to the situation rather than a deliberate decision: an instinctual, potentially panicked response with no forethought or malice. However, if a programmer were to instruct the car to make the same call in a life-threatening situation, it could be interpreted as premeditated homicide.


The challenge of making moral machines

#artificialintelligence

As applications for AI proliferate, so do questions about ethical development and embedded bias. In the waning days of 2020, Timnit Gebru, an artificial intelligence (AI) ethicist at Google, submitted a draft of an academic paper to her employer. Gebru and her collaborators had analysed natural language processing (NLP), and specifically the data-intensive approach of training NLP artificial intelligences (AIs). Such AIs can accurately interpret documents produced by humans and respond naturally to human commands or queries. In their study, the team found that training an NLP AI requires immense resources and creates a considerable risk of embedding significant bias into the AI. That bias can lead to inappropriate or even harmful responses.
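
To give a concrete sense of what "embedded bias" can mean, here is a simplified, self-contained probe of word-embedding associations in the spirit of WEAT (Caliskan et al., 2017). It is not the method of the paper discussed above, and it runs on made-up three-dimensional vectors:

```python
# Simplified association-bias probe on toy embeddings (invented vectors).
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def mean_assoc(word_vec, attribute_vecs):
    # Average similarity of a target word to an attribute set.
    return sum(cosine(word_vec, a) for a in attribute_vecs) / len(attribute_vecs)

emb = {
    "engineer": (0.9, 0.1, 0.0), "nurse": (0.1, 0.9, 0.0),
    "he": (1.0, 0.0, 0.1), "him": (0.9, 0.1, 0.1),
    "she": (0.0, 1.0, 0.1), "her": (0.1, 0.9, 0.1),
}
male = [emb["he"], emb["him"]]
female = [emb["she"], emb["her"]]

for word in ("engineer", "nurse"):
    bias = mean_assoc(emb[word], male) - mean_assoc(emb[word], female)
    print(f"{word}: male-vs-female association = {bias:+.3f}")
```

In a real audit the vectors would come from a trained model and the scores would be checked against a permutation baseline, but even the toy version shows how occupational terms can drift toward one gender.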


Ethics in Robotics and Artificial Intelligence

#artificialintelligence

As robots become increasingly intelligent and autonomous, from self-driving cars to assistive robots for vulnerable populations, important ethical questions inevitably emerge wherever and whenever such robots interact with humans and thereby affect human well-being. Questions that must be answered include whether such robots should be deployed in human societies in largely unconstrained environments, and what provisions are needed in robotic control systems to ensure that autonomous machines do not cause harm to humans, or at least minimize harm when it cannot be avoided. The goal of this specialty is to provide the first interdisciplinary forum for philosophers, psychologists, legal experts, AI researchers and roboticists to disseminate work specifically targeting the ethical aspects of autonomous intelligent robots. Note that the conjunction of "AI and robotics" here indicates the journal's intended focus on the ethics of intelligent autonomous robots, not the ethics of AI in general or the ethics of non-intelligent, non-autonomous machines. Examples of questions that we seek to address in this journal are:

-- computational architectures for moral machines
-- algorithms for moral reasoning, planning, and decision-making
-- formal representations of moral principles in robots
-- computational frameworks for robot ethics
-- human perceptions and the social impact of moral machines
-- legal aspects of developing and disseminating moral machines
-- algorithms for learning and applying moral principles
-- implications of robotic embodiment/physical presence in social space
-- variance of ethical challenges across different contexts of human-robot interaction
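
As a toy illustration of two items on this list, formal representations of moral principles and algorithms for moral decision-making, here is a sketch in which hard deontic constraints filter candidate plans before a harm-minimizing choice is made among the survivors; the plan attributes and rules are entirely invented:

```python
# Toy moral-reasoning sketch: deontic constraints, then harm minimization.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_harm: float        # e.g., predicted injury severity
    deceives_human: bool = False
    violates_consent: bool = False

# Hard constraints: any plan matching a rule is impermissible outright.
FORBIDDEN = (
    lambda p: p.deceives_human,
    lambda p: p.violates_consent,
)

def choose(plans):
    permitted = [p for p in plans if not any(rule(p) for rule in FORBIDDEN)]
    if not permitted:
        raise RuntimeError("no permissible plan; defer to a human operator")
    # Among permitted plans, minimize expected harm.
    return min(permitted, key=lambda p: p.expected_harm)

plans = [Plan("brake hard", 0.3),
         Plan("lie about fault", 0.1, deceives_human=True)]
print(choose(plans).name)  # -> "brake hard"
```

Note the design choice this encodes: the forbidden lie is rejected even though it has lower expected harm, which is precisely where deontological and consequentialist architectures diverge.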


Should we create moral machines?

#artificialintelligence

The development of artificial intelligence (AI) that can act autonomously has raised important questions about the nature of machine decision-making and its potential capabilities. Is it possible to implement an ethical dimension in autonomous machines, i.e., is ethics "computable"? Are autonomous machines capable of factoring moral considerations into their decisions? Can AI be programmed to know the difference between "right" and "wrong"? Once the topic of science fiction novels, these questions are leading to discussions about the actual creation of moral machines, also known as artificial moral agents, and have largely contributed to the expansion of the field of machine ethics.
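
One narrow sense in which ethics might be "computable" is the classic consequentialist calculus: score each action by its expected welfare and pick the maximum. The sketch below uses invented numbers and is meant as a starting point for the debate, not an answer to it:

```python
# Toy utilitarian calculus: actions scored by expected welfare change.
# Probabilities and welfare values are invented for illustration.
def expected_welfare(outcomes):
    """outcomes: list of (probability, welfare_change) pairs."""
    return sum(p * w for p, w in outcomes)

actions = {
    "proceed": [(0.9, 0.0), (0.1, -10.0)],  # small chance of serious harm
    "swerve":  [(0.7, -1.0), (0.3, -4.0)],  # certain minor harm, some risk
}

best = max(actions, key=lambda a: expected_welfare(actions[a]))
scores = {a: expected_welfare(o) for a, o in actions.items()}
print(best, scores)  # proceed scores -1.0, swerve -1.9
```

The hard part, of course, is everything the sketch assumes away: where the probabilities come from, how "welfare" is quantified, and whether aggregating it across people is morally legitimate at all.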


How Enhanced AI Could Be Achieved Through Crowdsourcing Morality

#artificialintelligence

With the rapid advancement of artificial intelligence (AI), qualms and questions have been raised about whether robots could act immorally or might soon choose to harm humans. Some people are calling for bans on robotics research, while others are calling for more research into how AI might be controlled. But how can robots learn ethical and moral behavior if there is no "user manual" for being human? This question of robotic ethics is making everyone apprehensive. We are concerned about the lack of understanding and empathy in machines: how are so-called 'calculating machines' going to know what is wrong and how to do the right thing, and how are we going to be judged and penalized by beings of steel and silicon?


Moral Machines: Translation Suppliers and AI Ethics

#artificialintelligence

That said, service suppliers might still be concerned about potential bias within the datasets used to build automated translation solutions. One response is that (post)editing is essentially tasked with removing any traces of unwanted "bias" generated by an unthinking machine. It would, of course, be interesting to know whether we can teach the technology to automatically distinguish potential bias (in the "social" sense) from semantic error in the industry sense of mistranslation. Or, more subtly, could translating something accurately unwittingly create an impression of bias for a given native speaker? Going forward, the pursuit of translation accuracy may require social inclusiveness in certain cases to address the emerging norms of new language-user communities.
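
As a hypothetical sketch of that idea, one crude way to separate "social" bias from mistranslation is to flag gendered pronouns that appear in the output but have no counterpart in the source. The function below works on English glosses for simplicity; a real pipeline would analyse the source language itself:

```python
# Crude bias-flagging heuristic for MT output (illustrative only).
GENDERED = {"he", "him", "his", "she", "her", "hers"}

def introduced_gender(source_gloss: str, mt_output: str):
    """Return gendered words present in the output but not the source."""
    src = set(source_gloss.lower().split())
    out = set(mt_output.lower().split())
    return sorted((out & GENDERED) - src)

# E.g., a gloss of a gender-neutral source sentence (as in Turkish,
# whose third-person pronoun "o" carries no gender):
print(introduced_gender("the doctor said that", "he said that the doctor"))
# -> ['he']: a candidate social-bias flag rather than a mistranslation
```

A flag like this would route the segment to a (post)editor as potential social bias rather than as a semantic error, which is exactly the distinction the paragraph above asks about.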


Crowdsourcing Moral Machines

Communications of the ACM

Robots and other artificial intelligence (AI) systems are transitioning from performing well-defined tasks in closed environments to becoming significant physical actors in the real world. No longer confined within the walls of factories, robots will permeate the urban environment, moving people and goods around and performing tasks alongside humans. Perhaps the most striking example of this transition is the imminent rise of automated vehicles (AVs). They are expected to increase the efficiency of transportation and free up millions of person-hours of productivity. Even more importantly, they promise to drastically reduce the number of deaths and injuries from traffic accidents.12,30 Indeed, AVs are arguably the first human-made artifacts to make autonomous decisions with potential life-and-death consequences on a broad scale. This marks a qualitative shift in the consequences of engineers' design choices. The decisions of AVs will also generate indirect negative consequences, such as effects on the physical integrity of third parties not involved in their adoption; for example, AVs may prioritize the safety of their passengers over that of pedestrians.
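
One way such design choices could be informed by public input, in the crowdsourcing spirit of the article's title, is to aggregate pairwise moral judgments into per-option scores. The sketch below uses invented judgments and a simple win-rate; research systems would use richer models such as Bradley-Terry fitting or formal voting schemes:

```python
# Minimal sketch: aggregate crowdsourced pairwise "who to spare" judgments.
# Data is invented for illustration.
from collections import Counter

# Each judgment: (spared, sacrificed) as chosen by one respondent.
judgments = [
    ("pedestrian", "passenger"),
    ("pedestrian", "passenger"),
    ("passenger", "pedestrian"),
    ("child", "pedestrian"),
]

wins, appearances = Counter(), Counter()
for spared, sacrificed in judgments:
    wins[spared] += 1
    appearances[spared] += 1
    appearances[sacrificed] += 1

# Rank options by how often they were spared when they appeared.
ranked = sorted(appearances, key=lambda o: wins[o] / appearances[o], reverse=True)
for option in ranked:
    rate = wins[option] / appearances[option]
    print(f"{option}: spared in {rate:.0%} of appearances (n={appearances[option]})")
```

Whether an aggregate like this should ever be wired into an AV's planner is exactly the normative question the article raises; the sketch only shows that the aggregation step itself is mechanically straightforward.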